Method of obtaining immersive videos with interactive parallax and method of viewing immersive videos
Patent abstract:
Method for obtaining immersive videos with interactive parallax comprising: determining the shape and size of a viewpoint zone centred around a rest viewing point; determining, with respect to that zone, the number of scanners forming a set of scanners; scanning a space with the set of scanners, each scanner producing a point cloud; merging the point clouds by making them coincide in the same space to obtain a merged point cloud; encoding the merged point cloud into a special image; and obtaining a video by binding together all the special images at a fixed frequency.
Publication number: BE1022580B1
Application number: E2014/5025
Filing date: 2014-10-22
Publication date: 2016-06-09
Inventors: Tristan Salome; Michael De Coninck; Chris Mascarello; James Vanderhaeghen; Gael Honorez
Applicant: Parallaxter
IPC main classification:
Patent description:
Method of obtaining immersive videos with interactive parallax and method of viewing immersive videos with interactive parallax. The present invention relates to a method for obtaining immersive videos with interactive parallax. It also relates to a method for viewing immersive videos with interactive parallax. So-called first-person video games are relevant background to our invention. In these first-person video games, a user moves a virtual character through a virtual 3D space or scene, usually by means of the keyboard and mouse. The video game computes in real time the rendering of the 3D scene from the point of view of the virtual character, and this rendering is displayed on the computer screen. This allows the user to move through the virtual world of the video game and interact with that world. Computer screens typically refresh at a minimum of 60 frames per second (60 Hertz), which implies that images must be rendered in 1/60th of a second to qualify as real time. In the first-person video game example, the user moves the mouse horizontally on the table and sees the image on screen change according to a rotation of the character's point of view, and therefore of the virtual camera in the 3D scene, in proportion to the movement of the mouse. When the user presses a key on the keyboard, the point of view moves forward, for example. These simple interactions already allow some users, after a few minutes of play, to feel as if they were in the skin of the game's virtual character. The sensation of virtual immersion consists in giving the user the impression of being physically present in the virtual 3D scene. This sensation can be more or less strong depending on the match between what is perceived by the user's senses and what would be perceived if the user were actually in the physical equivalent of the virtual 3D scene, in the real world. At one end of the immersion spectrum we therefore have the example of the first-person video game played on a computer screen with a mouse and a keyboard, and at the other end of the spectrum a computer "directly connected" to the user's nervous system, as depicted in the science-fiction film "The Matrix". That film shows a virtual immersion system in which the illusion is so perfect that the user is not even aware of being in a virtual 3D scene and believes, and feels, that she is in reality; the sensation of virtual immersion is therefore total. Virtual reality systems are commonly understood as systems that present to the user's eyes images that are consistent with the rotational movements of his head, and that allow the user, through controllers (keyboard, joystick, etc.), to control his movement in a virtual 3D scene. The most common technique used to obtain a virtual reality system is a virtual reality headset. The user wears the headset on his head, and the headset is connected to a computer. Through screens and sets of lenses placed in front of the user's eyes, the headset presents to each eye computer-generated images rendered in real time. The headset also includes a sensor for measuring the orientation of the user's head.
The principle is as follows: the user turns his head; the virtual reality headset detects this head movement and sends the new orientation of the user's head to the computer; the computer renders the virtual 3D scene stereoscopically with the two virtual cameras oriented according to the new orientation of the user's head; and the images rendered in real time by the computer are displayed in front of the user's eyes. Different factors influence the quality of the immersion experience at the visual level. The main factor is the match between the user's head movements (measured by his inner ear) and his vision. In reality, in the real world, we are accustomed to a perfect match between these two senses. Depending on the degree of inconsistency between the images seen by the eyes and the movements felt by the user's inner ear, he will feel anything from a slight sensation of discomfort, visual fatigue and migraines to malaise and an upset stomach, up to vomiting. These effects are called virtual reality sickness or "Cyber-Sickness" and are similar to seasickness. Immersive videos are prerecorded or pre-calculated stereoscopic films that cover a 360-degree field of view around the user. These immersive videos can be viewed through a virtual reality headset. The virtual reality headset measures the orientation of the user's head and allows the computer to send to the headset display the right and left images corresponding to that orientation. In the case of immersive videos, the images are prerecorded or pre-calculated, so they are not computed in real time. Instead of 1/60th of a second per image, for example, an image can take more than an hour to compute. This makes it possible to obtain an image quality much higher than that of virtual reality. When we move our point of view, the images of objects that are close to us move faster than the images of objects that are far away. We can clearly see this effect when we are in a moving train and look out of the window: the nearby barriers scroll past very quickly, while the distant mountain appears almost fixed. This effect is called parallax. Taking into account the translational movements of the user's head in a virtual reality headset gives the vision in the headset a parallax effect. Taking head movement into account in the images induces a parallax. We speak of interactive parallax as opposed to a parallax that could be described as passive, which would be linked to the displacement, within an immersive video, of the point of view chosen by the director of the immersive video. Existing immersive video systems do not take the user's head translation movements into account and therefore cannot provide interactive parallax. This limitation greatly restricts the immersion quality of these systems. Indeed, the user's brain expects to perceive parallax when he moves his head, but does not perceive it. This lack decreases the user's viewing comfort and increases the risk of "Cyber-Sickness". Virtual reality systems do provide this interactive parallax. But the problem of known virtual reality systems is that they require large memory and processing capabilities to store and process a huge amount of data in order to represent a scene realistically so that the user can move through it with a sensation of reality.
There is therefore a need to take a step further in the immersive quality of immersive video, while retaining the other benefits of immersive videos, primarily the quality of the images, and to offer immersive video systems that improve the user's feeling of immersion through interactive parallax while reducing the amount of data stored to reproduce the scene. To this end, the method for obtaining immersive videos with interactive parallax according to the invention comprises the following steps: a) determining the shape and size of a viewpoint zone centred around a rest vision point, b) determining the number of scanners forming a set of scanners with respect to the viewpoint zone, c) scanning a space via the set of scanners, each of which determines a point cloud, d) merging the point clouds into the same space to obtain a merged point cloud, f) encoding the merged point cloud into a special image, and g) obtaining a video by binding together all the special images at a given frequency, in particular twenty-five frames per second. The method for creating immersive videos with interactive parallax uses pre-calculated synthetic images or real shots, thus ensuring better image quality than the real-time rendering of virtual reality. The inclusion of interactive parallax in immersive videos allows the user to experience a feeling of total or near-total immersion, approaching the feeling of really being in reality, in the real world. The term "special" accompanies the images encoding point clouds so as not to confuse them with the rendered images that are presented to the user. The viewpoint zone limits the amount of information that must be stored in order to reproduce a scene, whether that scene is a virtual environment or a real-world space. This limitation makes the quantity of information, the data, manageable. The configuration, size and shape of the viewpoint zone predetermine the number and arrangement of the scanners that are used to scan the scene. This method of obtaining immersive videos with interactive parallax enables the promotion of brands through immersive advertising films. As the feeling of immersion is strong, the user feels as though he were living the commercial, and the impact of the product or brand on the user is therefore very strong. Immersive films of fiction, or scenes of action, adventure, horror or erotica, will give thrills because they will be experienced first-hand by the user thanks to the high degree of immersion. For example, an immersive film with interactive parallax allows the user to take the place of R2D2 on top of Luke Skywalker's spacecraft during the attack on the Death Star in "Star Wars", or to live through the landing in the film "Saving Private Ryan" alongside US soldiers. Immersive video can also be applied to amusement park "rides". The principle of a ride is to be carried, usually in a seat, through different sets. It is comparable to a roller coaster, but the emphasis is more on a tour of scenery than on the sensations of acceleration. A typical example is the visit to the "Haunted House" at "Disneyland". With immersive video with interactive parallax, a large number of "ride" experiences can be created entirely virtually. In this case, the system can be coupled to seats mounted on jacks, for example, so as also to reproduce the acceleration sensations related to the displacement of the viewpoint zone.
In particular, the user can be transported in a real nacelle (for example a roller coaster or a ferris wheel), the displacement of the nacelle being sent to the visualization system to synchronize the unfolding of the immersive film with interactive parallax with the displacement of the nacelle. Preferably, the method of obtaining immersive videos with interactive parallax according to the invention comprises the elimination of the redundant points in the merged point cloud to obtain a filtered point cloud, either by determining an apparent surface of an object, or by setting an order of priority for the set of scanners, or by a combination of both methods. This filtering step is preferable but not essential. It keeps only the points that give useful information and thus reduces the amount of information that must be stored to produce the immersive video with interactive parallax. Advantageously, in the method for obtaining immersive videos with interactive parallax according to the invention, said viewpoint zone comprises a predetermined volume containing the positions attainable during the translation movements of a user's head while the user stays in a fixed position. This makes it possible to take into account the displacement of the user's head within a restricted volume. The viewpoint zone corresponds to the latitude of translation movements that the head naturally has around its rest position, that is to say, when the user is sitting or standing without moving. Preferably, in the method for obtaining immersive videos with interactive parallax according to the invention, the set of scanners may consist either of virtual scanners or of physical scanners, the latter being used much as a film director uses his camera to shoot the scenes of his film. The use of virtual scanners makes it possible to increase the accuracy and quality of the images, because the set of scanners can then comprise a large number of scanners, without any problem of physical bulk, in order to sweep the scene with great precision. This number of virtual scanners can be adapted to the precision required by the scene and to the wishes of the director. According to an advantageous form of the method according to the invention, the set of scanners may be colour scanners, each scanner having a field of view of 360 degrees horizontally and 180 degrees vertically. The scanners thus scan the scene completely, in all possible directions. In particular, according to one embodiment of the method for obtaining immersive videos with interactive parallax according to the invention, at least the end points of the viewpoint zone carry a scanner. To scan the scene with a sufficient definition, the extreme points of the viewpoint zone must be covered; this means that if the viewpoint zone has a parallelepiped shape, at least the parallelepiped's vertices need to have a scanner, so in this example eight scanners would be needed to obtain sufficient accuracy of the scanned scene. The number of points kept from the scanners at the extreme points of the viewpoint zone is preponderant, whereas the number of points kept from the interior scanners is very limited, or almost nil.
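By way of illustration, the sketch below outlines, in Python, the capture pipeline of steps a) to g) together with the optional filtering step described above. It is a minimal sketch, not part of the invention's disclosure: the callables `scan`, `filter_points` and `encode_special_image` are hypothetical placeholders, and the simple list concatenation stands in for the fusion of the clouds in a common space.

```python
from typing import Callable, List, Tuple

# One scanned point: x, y, z position in the common space plus an r, g, b colour.
Point = Tuple[float, float, float, float, float, float]

def capture_immersive_video(scanner_positions: List[Tuple[float, float, float]],
                            scan: Callable,                  # placeholder: (frame, position) -> List[Point]
                            filter_points: Callable,         # placeholder: drops redundant points
                            encode_special_image: Callable,  # placeholder: point cloud -> "special" image
                            n_frames: int,
                            fps: float = 25.0) -> dict:
    """Sketch of steps c) to g). The scanner positions are assumed to have been
    derived from the shape and size of the viewpoint zone (steps a and b)."""
    special_images = []
    for frame in range(n_frames):
        clouds = [scan(frame, pos) for pos in scanner_positions]  # step c): one point cloud per scanner
        merged = [p for cloud in clouds for p in cloud]           # step d): fuse the clouds in a common space
        filtered = filter_points(merged)                          # optional filtering of redundant points
        special_images.append(encode_special_image(filtered))    # step f): encode into a special image
    return {"fps": fps, "special_images": special_images}        # step g): the special images, bound at a fixed rate
```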
Preferably, in the method of obtaining immersive videos with interactive parallax according to the invention, at least a distance and a colour are obtained for each scanned point with respect to the central point of the scanner. As the direction of each point is known, the direction and the distance of the point make it possible to reconstitute its three-dimensional position. Advantageously, in the method for obtaining immersive videos with interactive parallax according to the invention, the depth of a scanned point is the distance between the central point of the scanner and said scanned point. The general principle is that a depth gap of 5 cm at a distance of 1 km will not be discernible from any point of view within the viewpoint zone, whereas a gap of 5 cm at a distance of 50 cm will be strongly discernible. We can therefore calculate, for each distance to the scanner, the depth difference that can be tolerated without it being discernible from another point of the viewpoint zone. Preferably, in the method of obtaining immersive videos with interactive parallax according to the invention, the apparent surface of an object determines a scanning quality of the set of scanners, the point kept being the one from the scanner having the lowest apparent surface. The smaller this surface, the more detailed the scanner's view of this portion of the object and the better the scanning quality. Preferably, the transformation of the merged point cloud is ecospheric. This makes it possible to limit the number of points of the special image for a given definition. The points at the poles are not scanned with the same angular step of longitude as at the equator, in order to avoid redundant points and thus limit the number of points. The scan with the smallest angular step of longitude is the scan at the latitude of the equator; at other latitudes the scan uses a larger angular step of longitude. The ecospheric method preserves the principle of encoding latitude in the ordinate of the image and longitude in the abscissa of the image, as in equirectangular encoding, but the relation between longitude and abscissa is no longer linear. For each line of the image (each latitude), the equivalent circumference of the circle represented by that line is calculated: as a line of the image represents a horizontal section of the sphere, it corresponds to a circle in the horizontal section plane. The method makes it possible to obtain a good homogeneity of the surfaces corresponding to each of the pixels of the image obtained by scanning the scene, while completely covering all the firing directions.
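The ecospheric sampling described above can be illustrated with a short sketch: the number of longitude samples kept in each image row is made proportional to the circumference of the horizontal circle at that row's latitude. The linear-in-cos(latitude) rule and the rounding used here are illustrative assumptions; the text above does not fix an exact formula.

```python
import math

def ecospheric_row_widths(n_rows: int, equator_width: int):
    """For each latitude row of the encoding image, return how many longitude samples to keep.

    Rows near the poles correspond to smaller horizontal circles of the sphere,
    so they receive proportionally fewer samples than the equator row
    (assumption: proportional to the circle's circumference, i.e. cos(latitude)).
    """
    widths = []
    for row in range(n_rows):
        # latitude runs from +90 degrees (top row) to -90 degrees (bottom row)
        latitude = math.radians(90.0 - 180.0 * (row + 0.5) / n_rows)
        circumference_ratio = math.cos(latitude)   # relative to the equator circle
        widths.append(max(1, round(equator_width * circumference_ratio)))
    return widths

# Example: an 8-row encoding with 16 samples at the equator;
# fewer samples are kept near the poles, the most at the equator.
print(ecospheric_row_widths(8, 16))
```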
According to an advantageous form of the method of obtaining immersive videos with interactive parallax according to the invention, the scene is scanned by the set of scanners from a first position at a first moment, and from a second position at a second moment. One can picture the system as recreating around the user, at the time of viewing, a virtual 3D scene for each fraction of time of the immersive video. Each of these ephemeral virtual 3D scenes is limited to what the user can see from his viewpoint zone. The evolution of the appearance of these scenes corresponds to the movements of the objects or characters in the video and to the displacement of the position of the viewpoint zone controlled by the director of the film. Thus, unlike virtual reality, where, when rendering in real time, a camera is moved in the 3D scene when the user moves, in the invention it is the 3D scene that moves around the user when the viewpoint zone has been moved during the creation of the immersive film with interactive parallax. For the purposes of the invention, the method for viewing immersive videos with interactive parallax includes, for each of the special images of an immersive video with interactive parallax: a) the determination of a position and an orientation of the eyes of a user, by sensors and by the use of head motion prediction algorithms, b) the determination of the portion of a consolidated point cloud or of a filtered point cloud that lies within the viewing angle of the user, c) the loading of the portion of the consolidated point cloud or filtered point cloud visible to the user, d) the real-time rendering of two images from the loaded consolidated or filtered point cloud, and e) the presentation of the rendering to the eyes of the user. The density of loaded points is consistent with the definition of the virtual reality headset display. Note that the rendering of the point cloud is not lit: the colours encoded in the 3D points are directly those that will be presented to the user; there is no lighting computation. Advantageously, in the immersive video viewing method with interactive parallax according to the invention, all the points of the consolidated point cloud or of the filtered point cloud are loaded. It is preferable to work with the filtered point cloud in order to limit the amount of information and to avoid redundant points in the visualization of a scene. A sufficient number of points provides an adequate definition of the images and the video to meet the user's need for a sensation of total or near-total immersion. Preferably, in the immersive video viewing method with interactive parallax according to the invention, the position and orientation of the eyes are given by sensors located in a virtual reality headset. Advantageously, in the immersive video viewing method with interactive parallax according to the invention, the presentation of the rendering to the eyes is performed via the virtual reality headset. Through screens and lens sets placed in front of the user's eyes, the headset presents to each eye a real-time rendering of the point cloud corresponding to the current special image of the immersive video with interactive parallax. Advantageously, in the immersive video viewing method with interactive parallax according to the invention, a user is transported in a nacelle, the displacement coordinates of said nacelle being sent to a display system in order to synchronize the unfolding of an immersive film with interactive parallax with the displacement of the nacelle. The user thus transported experiences a feeling of total immersion.
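A minimal sketch of the viewing steps a) to e) listed above is given below; the callables for sensor reading, visibility selection, point rendering and display are hypothetical placeholders standing in for a real headset SDK and point-cloud renderer.

```python
def view_immersive_video(special_images,        # one decoded point cloud per special image of the video
                         read_eye_pose,         # placeholder: returns (position, orientation), with prediction
                         visible_portion,       # placeholder: selects the points inside the viewing angle
                         render_stereo_points,  # placeholder: renders the loaded points once per eye
                         present):              # placeholder: shows the two images in the headset
    """Sketch of viewing steps a) to e) for each special image of the video."""
    for cloud in special_images:
        position, orientation = read_eye_pose()                  # a) eye position/orientation (with prediction)
        portion = visible_portion(cloud, position, orientation)  # b) part of the cloud within the viewing angle
        loaded = list(portion)                                   # c) load only the visible points
        left, right = render_stereo_points(loaded, position, orientation)  # d) real-time rendering, no lighting
        present(left, right)                                     # e) present the rendering to the eyes
```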
The invention will now be described in more detail with the aid of the drawings, which illustrate a preferred embodiment of the method for obtaining immersive videos with interactive parallax and of the method for viewing immersive videos with interactive parallax. In the drawings: Figure 1 illustrates a volume representing a viewpoint zone within which a user can move his head, to the right and to the left, Figure 2 illustrates the position of the scanners with respect to the predefined viewpoint zone, in this case a parallelepiped, Figure 3 illustrates the spherical representation of a scanner, Figure 4 illustrates a point cloud after scanning a space, Figure 5 illustrates a scene and the visible part of the scene as seen from the viewpoint zone, Figure 6 illustrates the concept of accuracy of the scanned space with respect to the viewpoint zone, Figure 7 illustrates the scanning of a scene and a consolidated or filtered point cloud, Figure 8 illustrates the apparent surface filtering method, Figure 9 illustrates the ordered visibility filtering method, Figure 10 illustrates the sampling of a pixel of a special image, Figure 11 illustrates the grouping of the samples for a pixel of the special image, Figure 12 illustrates the concept of depth, Figure 13 illustrates that the density of the point cloud is consistent with the distance to the viewpoint zone, Figure 14 illustrates the concept of occlusion of scanners, Figure 15 illustrates a diagram of an immersive video with interactive parallax, Figure 16 illustrates the ecospheric representation of a scanner, the ecospheric projection of two latitudes and the encoding of the two projections in an image, and Figure 17 illustrates the concept of displacement of the viewpoint zone. In the drawings, the same reference has been assigned to the same element or to a similar element. Computer programs can simulate the equivalent of a complete movie studio with sets, lights and cameras. We then speak of three-dimensional objects, lights and virtual cameras; these elements do not exist in the real physical world, they exist only as a simulated representation in the computer. An example of a computer program of this type is the "Maya" software from the company "Autodesk". The set of these virtual three-dimensional elements (objects, lights, cameras) is called the virtual 3D scene, or more simply the 3D scene or virtual space 3. Once the virtual space 3 is defined, the computer can calculate an image I corresponding to what the virtual camera sees in the virtual space 3, taking into account the objects and lights present in this space 3 and the position of the virtual camera. This calculation is called rendering of the virtual space 3, and the image I resulting from this rendering is a synthetic image. Our two eyes perceive the real physical world from two slightly different points of view; they are spaced 6.5 cm apart on average in adults. This distance is called the interocular distance. This slight shift of perspective on the same real scene allows our brain to determine how far away the objects around us are. We then perceive the relief of our world. When images I are calculated simultaneously from two cameras separated by an interocular distance, stereoscopic rendering is performed. If the two cameras are strongly spread apart at the time of rendering or recording, for example 65 cm instead of 6.5 cm, then when these images I are presented in front of the viewer's eyes he will have the impression of having the vision of a giant watching a very small scene. A big house will give the impression of being a doll's house, for example. Conversely, if the cameras are very close together, say 6.5 mm, then the viewer will have the impression of being a Lilliputian watching a giant scene. A small pebble will give the impression of being a cliff. This effect on the perceived size of the scene applies to stereoscopic rendering in synthetic images I as well as to a real recording with real cameras in stereoscopic shooting. The rendering software can take into account the movements of the objects, the lights and the virtual camera.
If the software is then asked to perform successive renderings at different moments in time, the rendered images I will differ from one another and a film in synthetic images I is obtained. In traditional cinema, one second of action is decomposed into 24 fixed images I; therefore, to create a film in synthetic images I for cinema distribution, 24 images I must be calculated per second of action in the film. We speak of pre-calculated synthetic images I when the various images I of the film are first rendered and stored, and then replayed at the rate corresponding to the broadcast medium (24 images I per second for traditional cinema). The calculation of each synthetic image I can take a long time in order to obtain a good image quality. In most cases, the rendering lasts more than one hour per image I. It is therefore typical for a computer to calculate for a whole day (24 times 1 hour) the equivalent of one second of film (24 images per second). We can define the realism of synthetic images I as follows: the closer the image I of a virtual space 3 rendered by the computer comes to a photograph that would be taken in the physical equivalent of that space 3, the more realistic the rendering. Note that nowadays many spaces 3 can be rendered, in a pre-calculated way, with perfect realism, so much so that these renderings are indistinguishable from real photographs. If the computer is able to render each image I at the same rate as the images I are broadcast, the rendering is said to be done in real time. Staying with the example of cinema at 24 images per second, for the film to be rendered in real time, each image I must be computed in at most 1/24th of a second. The realism of the rendering and the maximum complexity of the space 3 are much higher in the case of pre-calculated synthetic images I than in the case of synthetic images I calculated in real time. But calculating the images I in real time allows the calculated images I to react to the actions of the user 1, which opens the way to video games and virtual reality. The sensation of virtual immersion consists in giving the user 1 the impression of being physically present in the virtual space 3. This sensation can be more or less strong depending on the match between what is perceived by the senses of the user 1 and what would be perceived if the user 1 were actually in the physical equivalent of the virtual space 3. Virtual reality systems are commonly understood as systems which present to the eyes of the user 1 images I which are consistent with the rotational movements of the head of the user 1, and which allow the user 1, by means of controllers (keyboard, joystick, etc.), to control his movement in a virtual space 3. The most common technique used to obtain a virtual reality system is a virtual reality headset. The user 1 wears the headset on his head and the headset is connected to a computer. Through screens and sets of lenses placed in front of the eyes of the user 1, the headset presents to each eye synthetic images I calculated in real time by the computer. The headset also includes a sensor for measuring the orientation of the head of the user 1.
The principle is as follows: the user 1 turns his head; the virtual reality headset perceives this head movement and sends the information on the new orientation of the head of the user 1 to the computer; the computer performs a stereoscopic rendering of the virtual 3D scene with the two virtual cameras oriented according to the new orientation of the head of the user 1; and the images I rendered in real time by the computer are displayed in front of the eyes of the user 1. Note that modern virtual reality headsets, such as those built by the company "Oculus", make it possible to take into account not only the orientation of the head of the user 1 but also its position, and that the motion sensors for the head of the user 1 can be multiple, some on the virtual reality headset and others attached to the computer, for example. Different factors influence the quality of the immersion experience at the visual level. The main factor is the match between the head movement of the user 1 (measured by his inner ear) and his vision. In reality we are used to a perfect match between these two senses. Depending on the degree of inconsistency between the images I seen by the eyes and the movements felt by the inner ear of the user 1, he will feel anything from a slight sensation of discomfort, visual fatigue and migraines to malaise and an upset stomach, up to vomiting. These effects are called virtual reality sickness or "Cyber-Sickness" and are similar to seasickness. The inner ear/vision match is influenced by several factors. The quality of the head motion sensor: the more accurate the sensor, the better. The delay between the measurement of the movement by the headset and the display of the corresponding images I in front of the eyes of the user 1: the shorter this delay, the better. The quality of the head motion prediction algorithm: the delay just mentioned can be reduced but cannot be completely suppressed; to overcome this, modern virtual reality systems forecast the orientation of the head at the moment when the images I will reach the eyes of the user 1, and the better this forecast, the better. Taking into account not only the orientation of the head but also its position: we constantly make small head translation movements that are sensed by our inner ear, and if these changes of head position are taken into account in the calculation of the images I, the "Cyber-Sickness" is greatly reduced and the quality of the immersion experience is increased. Other important elements also influence the quality of immersion. The quality of the stereoscopy: if the two images presented to the viewer's eyes show inconsistencies such that no real object could produce this stereoscopic image pair, it will hinder the user 1; his brain will reinterpret as best it can the stereoscopic images I presented to him, but he will perceive the objects as fuzzy and feel a visual strain that can range from minor discomfort to headaches. The viewing angle of the headset: the more the image I presented to the eyes of the user 1 covers a large part of the field of view of the user 1, the better. If the field of view is too small, the user 1 will have the impression of seeing the virtual space 3 in front of his eyes as through blinkers or through an arrow slit. If, on the other hand, the field of vision is wide and covers a good part of the peripheral vision of the user 1, the feeling of immersion will be stronger and more pleasant.
The refresh rate of the display (the number of images I per second presented to the user 1): the higher it is, the better. The realism of the rendering: the more the space 3 is perceived as realistic, the more the user 1 can believe in it. The definition of the images I presented to the user 1 via the headset: the more defined (less pixelated) the images I, the better; if the images I have defects, such as fuzzy areas for example, the user 1 will have more difficulty believing in the immersion. The vision of his own body by the user 1 when he looks in a direction where he should see his body (typically downwards): this of course amplifies the user 1's feeling of presence in the scene, because he also sees himself in the space 3. When recording a video, the camera records the action taking place right in front of it and on the sides up to the limit of its field of vision. This field of view is expressed in degrees and gives the total viewing angle covered by the camera. For example, for a horizontal field of view of 90 degrees, the right edge of the image I will show the objects located at 45 degrees from the line of sight of the camera. The field of view is defined by the horizontal and vertical viewing angles of the camera. In the particular case of a spherical video, the field of view of the camera is 360 degrees horizontally and 180 degrees vertically; the field of vision is total because the camera sees in all directions. A stereoscopic video simultaneously records, for the same space 3 (virtual or real), two videos, each from a distinct point of view. These two distinct points of view correspond to the two eyes of the viewer: one video, called the right video, is presented only to the viewer's right eye, and the other, called the left video, is presented only to the viewer's left eye. By this method of recording and restitution of the stereoscopic video (or more precisely of the stereoscopic video pair), the viewer is able to perceive relief in the video. A stereoscopic spherical video has the characteristics of both spherical video and stereoscopic video. It is a pair of videos, one for the right eye and the other for the left eye, each covering the complete spherical field of view of 360 degrees horizontally by 180 degrees vertically. One way to obtain a stereoscopic spherical video is to take several pairs of cameras and stitch together all the images I coming from the right cameras to obtain the right video, and do the same with the left camera images to make the left video. Several manufacturers offer camera rigs of this kind, made from assemblies of Go Pro cameras for example. A cylindrical video (stereoscopic or not) is the equivalent of a spherical video in which the vertical viewing angle is not complete, for example 360 degrees horizontally by 90 degrees vertically. By coupling a virtual reality headset with a stereoscopic spherical video, an immersive video system is obtained. The virtual reality headset measures the orientation of the head of the user 1 and transmits it to the computer. The computer extracts from each of the two stereoscopic spherical videos the portion of the video that corresponds to the field of view for the new orientation of the head of the user 1. These two pieces of video are displayed in front of the user's eyes.
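As an illustration of how the computer can extract from a spherical video the portion corresponding to the head orientation, the sketch below maps a viewing direction to the pixel it falls on in an equirectangular frame. This assumes the spherical video is stored equirectangularly; a real player performs this reprojection for every pixel of the headset display and also corrects for lens distortion, which is omitted here.

```python
def direction_to_equirect(yaw_deg: float, pitch_deg: float, width: int, height: int):
    """Map a viewing direction to pixel coordinates in an equirectangular frame.

    Longitude (yaw) maps linearly to the horizontal axis and latitude (pitch)
    to the vertical axis, following the usual equirectangular convention.
    """
    x = int((yaw_deg % 360.0) / 360.0 * width) % width
    y = min(height - 1, max(0, int((90.0 - pitch_deg) / 180.0 * height)))
    return x, y

# The centre of a 4096 x 2048 frame corresponds to yaw 180 degrees, pitch 0 degrees.
print(direction_to_equirect(180.0, 0.0, 4096, 2048))  # (2048, 1024)
```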
These systems have immersion characteristics, for example the match between the rotational movements of the head perceived by the inner ear and the images arriving at the eyes of the user 1, and the perception of the relief of the scenes presented to the user 1. The system also has characteristics specific to a video: an action predetermined by the director of the video takes place in front of the eyes of the user 1. In the case of the invention, the action takes place around the user 1. The position of the user's point of view is also predetermined by the director of the video. Just as for a normal video, the director moves the recording camera wherever he wants at the time of recording (or of rendering, for pre-calculated synthetic images). The result for the user 1 is an immersive video in which he feels immersed in the action of the film presented to him. Immersive video can use real footage, recorded or live. Immersive video can also use pre-calculated computer-generated images. Virtual reality, by its very principle, uses only images calculated in real time. Immersive video is not limited in the quality of its images, nor in the complexity of the scenes shot, which can include an infinity of objects. Virtual reality, being dependent on real-time rendering, is limited in the quality of its images and in the complexity of the virtual 3D scene. Traditional immersive video, captured from a single point of view, does not take into account the translation movements of the head of the user 1. Virtual reality takes the translation movements of the head of the user 1 into account and uses this information to move the virtual rendering cameras in the virtual space 3. Taking these translational head movements into account in the immersive video according to the invention gives a better quality of immersion and reduces the risk of "Cyber-Sickness". Immersive video does not allow the user 1, as already noted, to move freely in the filmed scene. It is the director of the film who has previously chosen the movement and the position of the point of view of the user 1 in the space 3. In virtual reality, the user 1 can use interfaces to ask the computer to move his point of view in the virtual space 3. Note that the ability to move freely in the space 3 is not necessarily an advantage; it depends on the context of use. For an engineer who wants to visit installations in virtual reality and who has learned to master the navigation interface, it is an advantage to be able to move wherever he likes, to linger over this or that detail, and to come and go between different parts of the virtual place. In other cases, on the contrary, being able to guide the user 1 along a predetermined path, with no training required to use the system, is rather an advantage. Indeed, mastering a virtual reality navigation interface can be somewhat difficult and can lead to frustration on the part of some users 1, or even deter others from attempting the experience. One could compare this to the positions of the driver and the passenger in a car. In the case of virtual reality, the user 1 is the driver of the car. In the case of immersive video, the user 1 is the passenger of the car: he does not control the movement of the car, but he can look wherever he wants. When we move our point of view, the images I of objects that are close to us move faster than the images I of objects that are far away.
We can clearly see this effect when we are in a moving train and look out of the window: the nearby barriers scroll past very quickly, while the distant mountain appears almost fixed. This effect is called parallax. Taking the translation movements of the head of the user 1 into account in a virtual reality headset gives the vision in the headset a parallax effect. The term interactive parallax is used to describe the fact that the translation movements of the head of the user 1 are taken into account by the system. Taking head movement into account in the images induces a parallax. We speak of interactive parallax as opposed to a parallax that could be described as passive, which would be linked to the displacement, within the immersive video, of the point of view chosen by the director of the immersive film. The pre-recorded video clips may either have been previously shot with a suitable shooting system or have been pre-computed as synthetic images I. An immersive video shot live and thus sent directly to the user 1 allows the user 1 to live a telepresence experience. Telepresence gives the user 1 the illusion of being present in a different place, as in the case of the immersive film, but of seeing there events that are taking place at that very moment. Note that there is a transmission and processing delay, impossible to eliminate, between the moment when the immersive film is recorded and the moment when it is seen by the user 1. On the other hand, the local reaction loop between the movements of the user 1 and the modification of the images I presented to him is very fast. This makes the inner ear/vision match very good and the feeling of immersion strong. One could say that the user 1 has a strong immersion experience in a scene that took place a few tenths of a second, or even seconds, earlier, depending on the distance of the recording. The invention makes it possible to take into account the displacement of the head of the user 1 within a restricted volume that we call the viewpoint zone ZPV, illustrated in Figure 1. The viewpoint zone ZPV corresponds to the latitude of head movement that the user 1 naturally has around his rest position, while the user does not move about. This rest position corresponds to the position of the head of the user 1 when he holds himself straight and relaxed, without bending, stretching up or crouching down. The latitude of movement corresponds to the positions normally attainable by the head of the user 1 without taking steps, in the case of the standing position, and without getting up or moving his seat, in the case of the sitting position. The exact size of the viewpoint zone ZPV and its geometric shape may change depending on the position intended for the user 1: sitting, lying or standing. To give an idea of dimension, a zone of 50 cm in height, 1 metre in depth and 1 metre in width is, for example, sufficient to delimit the potential positions of the head (and therefore the eyes) of the user 1. The point R is the central point of the viewpoint or vision zone ZPV, i.e. the point located between the eyes of a user 1 when he is in a rest position relative to the experience. By experience we mean the type or story of the film; take the example of a person sitting in a car as a passenger. The passenger can move his head to look in the direction he wants, but he does not move his body; he remains seated in a rest position, normally seated.
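The viewpoint zone ZPV can be pictured as a simple axis-aligned box around the rest point R. The sketch below, using the indicative 1 m x 1 m x 50 cm dimensions quoted above, shows a containment test for a head position; the class, its field names and the 1.2 m rest height are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ViewpointZoneZPV:
    """Axis-aligned box centred on the rest point R (all sizes in metres)."""
    center: Tuple[float, float, float]   # the rest point R
    width: float = 1.0                   # left/right latitude of head movement
    depth: float = 1.0                   # forward/backward latitude
    height: float = 0.5                  # up/down latitude

    def contains(self, head_position: Tuple[float, float, float]) -> bool:
        cx, cy, cz = self.center
        x, y, z = head_position
        return (abs(x - cx) <= self.width / 2 and
                abs(y - cy) <= self.depth / 2 and
                abs(z - cz) <= self.height / 2)

zone = ViewpointZoneZPV(center=(0.0, 0.0, 1.2))  # rest point R assumed 1.2 m above the floor
print(zone.contains((0.2, -0.3, 1.3)))   # True: a small head translation stays inside the zone
print(zone.contains((1.0, 0.0, 1.2)))    # False: this would require leaving the rest position
```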
In the context of immersive video with interactive parallax, we speak of the viewpoint zone ZPV where, for a classic film (or even for a stereoscopic spherical film), we would speak of the position of the camera. In the case of immersive videos with interactive parallax, the movements of the head are taken into account as long as the user 1 remains in his sitting position (still in the car analogy). 3D scanning consists in scanning an area of the space 3 around the scanner at regular steps, from a central point C, in all directions, covering 360 degrees. C is thus the central point of a scanner. To simplify the representation of a set of scanners corresponding to the configuration, size and shape of the viewpoint zone ZPV, a single scanner is illustrated in the figures by its central point C. The scan generally has the form of a grid. For each point of the grid, the scanner obtains the distance d from this point P to the central point C. Since the direction of each point P is known, the direction and the distance d of the point P make it possible to reconstitute its three-dimensional position in space. Many different technologies are currently available to perform 3D scans. These include Lidar scanners, triangulation systems with structured light sources, and time-of-flight cameras. Other technologies do not, strictly speaking, perform 3D scans, but make it possible to calculate the depth p of the points P from at least two images I by triangulation; such images I are taken by light field cameras or stereoscopic cameras. The number of spherical 3D scanners required may vary, provided that the extreme points, the vertices, of the viewpoint zone ZPV are covered. One can use, for example, 9 scanners, each with a centre C: one scanner at the centre of the viewpoint zone ZPV (thus at the rest point R) and the other eight at the corners, the vertices, of the rectangular parallelepiped representing the viewpoint zone ZPV. This rectangular parallelepiped configuration is shown in Figure 2. These scanners form a set of scanners 2. To obtain sufficient information to support the interactive parallax at the time of viewing, a set of scanners 2 is used rather than an individual spherical 3D scanner. The term scanners is used in the description of the invention for a set of virtual or physical 3D scanners 2 which perform scanning in all directions, over 360 degrees. Another configuration for the scanners is a diamond configuration (not shown) employing seven scanners: a scanner at the centre of the viewpoint zone ZPV, therefore at the rest point R, a scanner at the front extreme point (front vertex), one at the rear extreme point, and similarly one on the right, one on the left, and finally one above and one below the rest position R. In practice, in pre-calculated synthetic images, preferably many more than nine scanners 2 are used. Sweeping the inside of the parallelepiped viewpoint zone ZPV, which measures 105 cm deep and 45 cm high, with a pitch of 15 cm gives a total of 8 x 8 x 4 = 256 scanners 2, as is also visible in Figure 2. This shape and size of the viewpoint zone ZPV is sufficient for the head movements to be taken into consideration during scanning. Care must therefore be taken to ensure that the viewpoint zone ZPV is large enough, but not too large, which would amount to being in standard virtual reality mode.
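The regular arrangement of 256 scanners can be reproduced with a short sketch: a parallelepiped 105 cm deep and 45 cm high swept with a 15 cm pitch gives 8 x 8 x 4 = 256 centres. The 105 cm width is an assumption, since only the depth and the height are stated above.

```python
def scanner_grid(depth_cm: float = 105.0, width_cm: float = 105.0,
                 height_cm: float = 45.0, pitch_cm: float = 15.0):
    """Return the centres of a regular grid of scanners filling the viewpoint zone.

    With a 15 cm pitch, a 105 cm edge yields 8 positions (0, 15, ..., 105)
    and a 45 cm edge yields 4 positions (0, 15, 30, 45): 8 x 8 x 4 = 256 centres.
    """
    def axis(length_cm: float):
        n = int(length_cm / pitch_cm) + 1
        return [i * pitch_cm for i in range(n)]

    return [(x, y, z)
            for x in axis(depth_cm)
            for y in axis(width_cm)
            for z in axis(height_cm)]

centres = scanner_grid()
print(len(centres))  # 256
```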
This viewpoint zone ZPV therefore makes it possible to limit the number of points that will be stored, and thus to limit the storage capacity required by these immersion systems and make them manageable compared with virtual reality systems, which need to store a huge amount of information to obtain a level of detail equivalent to that of our invention. In the method according to the invention, only the information of the points which are useful for representing the scene from the viewpoint zone ZPV is stored. Returning to the example of the use of 256 scanners, it will not be necessary, in order to obtain sufficient scanning precision, to use all the points of all the scanners located in the internal part of the parallelepiped. The viewpoint zone ZPV may be represented by a central point R to facilitate explanation, description and representation, but this zone is always a volume. In Figure 2, the scanners at the vertices of the parallelepiped are illustrated by points, one of them bearing the reference C1, with two further scanners, such as C3, located on an outer face of the parallelepiped. 256 scanners is a number chosen for convenience with respect to encodings in computer systems, but another number of scanners 2 can be used. Note that the point selection, and therefore filtering, methods which will be used later mean that the number of points kept from the scanners 2 at the extreme points 7 of the viewpoint zone ZPV is very large, whereas the number of points kept from the interior scanners 2 is very limited, or even almost zero. Therefore, greatly increasing the number of scanners 2 does not proportionally increase the number of points that will be retained in the end, after the processing of the scans. Just as for a camera, a scanner also has a field of view, or scanning field, which can be expressed in horizontal and vertical degrees. Some tripod-mounted physical 3D scanners have almost total spherical scanning fields, almost 360 degrees by 180 degrees, and are called spherical 3D scanners, as can be seen in Figure 3. Colour scanners are scanners that give, for each scanned point P, the distance d of this point P but also its colour c. In the preferred embodiment, colour spherical virtual 3D scanners are simulated. For each colour scanner a virtual camera is created, which scans the space 3 around it in all directions (360 degrees by 180 degrees), i.e. the equivalent of a spherical camera. For each direction, the computer calculates the first point P intersected in the space 3 by a ray coming from the centre C of this virtual spherical camera, together with the distance d with respect to the virtual centre C and the colour c of the point P touched by the ray, and stores this information in a file. The colour c of this point P is calculated in the usual way for synthetic images I, as if the firing ray were a vision ray of a virtual camera. To calculate the colour c of the touched point P, the computer therefore takes into account the texture and appearance of the virtual object touched, the virtual lights in the space 3 and their bounces, as well as the position of the virtual spherical camera. Figure 4 illustrates that the raw result of a 3D scan is a cloud of 3D points, also simply called a point cloud 9. The point cloud 9 is composed of all the points P that have been scanned. The spatial position of each point P is known with respect to the scanner 2, centre C, and the colour c of the point P is also defined.
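A minimal sketch of such a virtual spherical colour scanner follows: one ray is cast per angular step over 360 x 180 degrees, and the distance and colour of the first hit are stored. The `trace_first_hit` callable is a hypothetical placeholder for the renderer's ray-casting routine, and the angular stepping shown here is uniform, not the ecospheric variant described earlier.

```python
import math

def spherical_color_scan(center, trace_first_hit, n_lat: int = 180, n_lon: int = 360):
    """Scan 360 x 180 degrees around `center`, one ray per (longitude, latitude) step.

    `trace_first_hit(origin, direction)` is a placeholder that must return
    (distance, (r, g, b)) for the first surface hit, or None if nothing is hit.
    """
    points = []
    for i in range(n_lat):
        lat = math.radians(90.0 - 180.0 * (i + 0.5) / n_lat)
        for j in range(n_lon):
            lon = math.radians(360.0 * (j + 0.5) / n_lon)
            # unit direction vector for this (latitude, longitude) pair
            direction = (math.cos(lat) * math.cos(lon),
                         math.cos(lat) * math.sin(lon),
                         math.sin(lat))
            hit = trace_first_hit(center, direction)
            if hit is not None:
                distance, color = hit
                # direction + distance are enough to rebuild the 3D position later
                points.append((direction, distance, color))
    return points
```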
In virtual reality, the colours and luminosities corresponding to the surfaces of the virtual objects seen by the user 1 must be calculated in real time according to the lighting, the appearance of the virtual object and the position of the two cameras. The invention also uses real-time rendering for visualization, but it is performed on a point cloud 9 whose colours c have already been pre-calculated or recorded. No light source is taken into account during the real-time rendering performed in the viewing method of the invention. All the interaction calculations with the light have already been made, or directly recorded, and stored in the colour c of the points P of the colour point clouds 9. This makes it possible to have a very high rendering quality, as for stereoscopic spherical immersive films. Figure 5 illustrates a comparison between the information present in a virtual reality scene 10 and in a scene 10A of an immersive video with parallax. The virtual reality scene 10 in which the user 1 is located is complete; that is, when the user 1 moves in a virtual room, represented in this example by the scene 10, all the objects of the room are loaded into the scene 10. In the invention, by contrast, only those elements that are potentially visible from the viewpoint zone ZPV are loaded at a given time. In Figure 5 the thicker lines of the scene 10A show the parts of the scene elements that are potentially visible from the viewpoint zone ZPV. Thus, only the left side of the rectangular shape is visible, while the right side of the outline of the scene 10A is invisible from the predetermined viewpoint zone ZPV. This makes it possible to reduce the size of the point clouds 9 and thus to reduce the memory capacity needed to record the information of the point clouds 9. Figure 6 illustrates the concept of precision of the point cloud 9 in the space 3. Staying with the example of the virtual room 10, in virtual reality the precision of the modelling of the virtual objects in the virtual room is homogeneous; that is, the accuracy of the model details will be the same for all objects in the virtual room. In the case of the invention, the objects 11 close to the viewpoint zone ZPV have much more precision than the distant objects 12. Thus, the point cloud 9 of the near object 11 has more points resulting from the scan than the point cloud 9 of the distant object 12. In the example of Figure 6, the point cloud 9 of the near object 11 has 9 points, while the point cloud 9 of the distant object 12 shows only 3 points. The accuracy is thus variable depending on the position of the viewpoint zone ZPV: the accuracy of the same object can be very high at one moment of the immersive film and very low at another. As in nature, the objects that are close to the viewpoint zone ZPV have a good resolution and the distant objects have a lower resolution; it all depends on the distance between the object and the viewpoint zone ZPV. Figure 7 illustrates the scanning of a scene with two scanners C1 and C2. A first point cloud 4 is obtained when scanning the space 3 with the scanner C1. A second point cloud 5 is obtained when scanning the space 3 with the scanner C2. The point clouds 4 and 5 are different: they represent the points seen from the selected scanner C1 or C2.
Thus, the scanner C1 can only see the horizontal part of the rectangle-shaped scene, while the scanner C2 sees the same horizontal area as C1 and also the vertical side on the right of the figure. After obtaining the point clouds 4 and 5, these different scans must then be consolidated by moving and orienting each of them in space in a manner consistent with the scanning positions and orientations. This makes it possible to obtain a point cloud 6 that is sufficiently dense and complete to extract the geometry of the elements of the scanned space 3. The consolidation of the various scanned point clouds is done simply by taking, for each point P scanned by a scanner 2, the source point C of its shot, the direction of its shot and the depth p of the point P. This makes it possible to find the Cartesian position X, Y, Z of all the points in space and to associate them with other points coming from the same scanner and from other scanners (with different positions C) to obtain the consolidated point cloud 6. Figure 7 illustrates the virtual space 3 and the first and second scanners C1, C2. Once the first cloud 4 and the second cloud 5 are merged, they form the merged or consolidated point cloud 6. It is best to minimize the number of points in the point cloud, so as to have the most efficient system possible by limiting the amount of information to store, and thus to avoid having multiple points that encode the same surface area of an object. But by default there are many redundant points. This is clearly visible in Figure 7, in which the merged point cloud 6 shows that the parts of the virtual space 3 traced by the point cloud 4 (represented by dots) and the point cloud 5 (represented by lines) coincide and that points are redundant: in the consolidated or merged point cloud 6, dots and lines lie on top of each other. To obtain the consolidation 6 of the point clouds 4 and 5, the scanners C1 and C2 are placed at their correct positions in a coherent space, which places the point clouds 4 and 5 in this coherent space, where they form the merged point cloud 6. The position of a point from a given scanner is calculated as P = Ci + D * d, where: P is the position of the point in the global space reference frame, Ci is the position of the scanner centre in the global space reference frame, D is the direction of the normalized firing ray for that point of the scanner in the global space reference frame, and d is the distance between the centre of the scanner and the point P.
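The formula P = Ci + D * d can be applied directly to merge the individual scans into a single consolidated point cloud. A minimal sketch follows, assuming each scan stores its centre together with (direction, distance, colour) samples already expressed in the global reference frame.

```python
def consolidate(scans):
    """Merge several scans into one point cloud in the common (global) space.

    `scans` is a list of (center, samples) pairs, where each sample is
    (direction, distance, color) with `direction` a unit vector in global space.
    Each point is rebuilt as P = C + D * d.
    """
    cloud = []
    for center, samples in scans:
        cx, cy, cz = center
        for (dx, dy, dz), distance, color in samples:
            point = (cx + dx * distance, cy + dy * distance, cz + dz * distance)
            cloud.append((point, color))
    return cloud

# Two one-sample scans from different centres end up in the same global frame.
print(consolidate([((0.0, 0.0, 0.0), [((1.0, 0.0, 0.0), 2.0, (255, 0, 0))]),
                   ((0.5, 0.0, 0.0), [((0.0, 1.0, 0.0), 1.0, (0, 0, 255))])]))
```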
Some components of the apparent colour of a point on an object depend on the viewing angle of the virtual camera. In synthetic imagery this component is called the specular part of the rendering. Put simply, this part is equivalent to a reflection. The same point on the same virtual object with the same lighting will therefore not look the same for two different virtual camera positions, because of this reflection component. The points in the final point cloud have colours that have been calculated by virtual scanners with different centres. In the case of objects with a strong specular component, or simply highly reflective materials, we risk having inconsistencies when placing side by side two points that have been captured from two different scanners. Preferentially, we use a rendering technique for calculating the colours of the points that cheats on the calculation of the direction of the ray of vision (the ray going from the rendered point to the camera). This is a method similar to one already used in synthetic imagery for stereoscopic rendering. The method consists, when calculating the colour c of a point P, in giving as the direction D of the ray of vision not the actual direction of the ray of vision going to the scanner which calculates this point P, but a ray which points to the rest point R of the viewpoint zone ZPV. All reflections and speculars are then consistent, regardless of the 3D scanner that records them: whatever the centre C of the scanner, the point P will now always give the same reflected image. Note that for pure reflections of the mirror type, other techniques are possible. One of them is to make the mirror transparent and to create, on the other side of the mirror, a 3D scene equivalent to the one in front of the mirror but mirrored in the plane of the mirror. Note that it is also possible to add to the result of the 3D scan a reflectivity attribute that would make it possible, at the moment of rendering the point cloud, to calculate certain reflective components of the image in real time. We want to minimize the number of points in the point cloud in order to have the most efficient system possible, and we therefore want to avoid having multiple 3D points that encode the same surface part of an object; but by default we have a lot of redundant points. Preferentially, we use a method which determines, for each scanned point, by which other 3D scanners the corresponding surface of the object is visible and whether the scanner which is currently scanning this surface has the most qualitative point of view. If the current scanner is the most qualitative for this piece of surface, it is the 3D point of this scanner that will be kept for the final point cloud. If this is not the case, the 3D point is not included in the final point cloud, because it is the other 3D scanner, the one that has a better quality on this piece of surface, that will calculate a 3D point. Several variants can be used to determine the quality of vision of a piece of surface by the different 3D scanners. Preferably, we use a formula combining two criteria: the distance d between the surface and the scanner (the greater the distance d, the lower the quality) and the angle β between the surface and the ray of vision coming from the scanner (the more the surface is perpendicular to the viewing direction, the better the quality). This preferred method determines an apparent surface of the object, as illustrated in Figure 8. The smaller this surface, the more detailed the scanner's view of this portion of the object and the better the quality of the scan. We use the formula apparent_surface = d * d / cos(β), with d: the distance between the scanned point P and the centre of the scanner, and β: the angle between the normal N of the surface and the scanner's ray of vision. Note that this formula is valid for comparing the apparent surfaces of the same point between different scanners only if they have the same angular definition. For the different 3D scanners that see the same portion of surface, we therefore choose to retain the 3D point P corresponding to the scanner that has the smallest apparent_surface. Note that the simplified formula apparent_surface = d * d also works.
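A minimal sketch of the apparent-surface criterion follows: among several observations of the same piece of surface, the point kept is the one from the scanner with the smallest apparent_surface = d * d / cos(β). How the same piece of surface is matched across scanners is assumed to be handled elsewhere and is not shown.

```python
import math

def apparent_surface(distance: float, beta_rad: float) -> float:
    """Apparent surface of a scanned patch: d * d / cos(beta).

    `beta_rad` is the angle between the surface normal and the scanner's ray;
    a patch seen from further away, or more obliquely, gets a larger value.
    """
    return distance * distance / math.cos(beta_rad)

def keep_best_scanner(candidates):
    """Among candidate observations of the same surface patch, keep the one with
    the smallest apparent surface, i.e. the most detailed view.

    `candidates` is a list of (scanner_id, distance, beta_rad, point) tuples.
    """
    return min(candidates, key=lambda c: apparent_surface(c[1], c[2]))[3]

# Example: the closer, more frontal scanner wins.
best = keep_best_scanner([("C1", 2.0, math.radians(60), "point from C1"),
                          ("C2", 1.0, math.radians(10), "point from C2")])
print(best)  # point from C2
```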
Another filtering method, which we call ordered visibility, can also be used; it is illustrated in Figure 9. The method consists of defining a priority order for the 3D scanners and executing the following test for each scanned point: is the point P visible from a 3D scanner that has a higher priority? If so, that higher-priority 3D scanner records the point; if not, the current scanner records it.

Figure 9 shows three scanner centers labeled C1, C2 and C3. For simplicity the representation is made in two dimensions. The order of priority of the scanners is equal to their number, so the scanner C1 has priority over the scanner C2, which has priority over the scanner C3. A shows the surfaces that are kept for the scanner C1: since it has priority over the others, it keeps all the surfaces it can see. B shows the surfaces that are kept for the scanner C2: the scanner C2 sees two surfaces that are not visible from the scanner C1. C shows the surfaces that are kept for the scanner C3: only these surfaces are kept, since the rest of the surfaces that the scanner C3 can see are already seen by the scanners C1 and C2, which have a higher priority.

Note that the two filtering methods, quality and ordered visibility, can also be combined.

In pre-calculated synthetic images, the color of a pixel is generally not calculated on the basis of a single ray cast, but rather of a multitude of rays within the pixel surface. Each ray cast for a pixel corresponds to a sample used to determine the color c of the pixel. Multi-sampling therefore consists in casting several rays for the same pixel and averaging the colors obtained for each ray so as to determine the final color of the pixel. Increasing the number of samples for one pixel significantly increases the rendering quality, especially in situations where the pixel PX straddles the edge of an object O.

Similarly, to calculate the color c of a point P of the point cloud 9 in a given direction, we can improve the quality of its rendering by using several shots. Figure 10 illustrates 4 shots from the scanner C for a pixel PX of the image I. This image I is the image encoding the point cloud; note that it is different from the rendered image that the user sees when viewing the immersive video with parallax that has been created.

The colors c and the depths p of the 4 points obtained by the 4 shots could be averaged, but in the case of the recorded depths this average is a problem. Indeed, the different rays will often hit objects O at different distances for the same pixel PX of the scan, particularly at the edges of the objects O. If the distances are averaged, we often obtain point depths (distances to the center of the scanner) that do not correspond to any point P of the virtual 3D scene. The problem is that, from a slightly different point of view, these 3D points will appear suspended in empty space.

Such a point suspended in empty space is marked with an x in Figure 10; it is a point averaged in terms of color and position over the samples taken. Following the example of Figure 10, P1 and P2 are two sampled points of an object O of blue color, and P3 and P4 are two sampled points of an object O of red color. So the color of the point x will be purple and its position the average of the positions of the points P1, P2, P3 and P4. When this point x is viewed from a camera placed at C, this is not a problem; but from another point of view D, the point x does not correspond to any existing geometry in the virtual space 3 and will appear to float in the air.
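Returning to the ordered-visibility filter described above, here is a minimal sketch in Python; the visibility predicate is a hypothetical placeholder, not a function of any particular library:

```python
def keep_point_ordered_visibility(point, current_scanner, scanners, is_visible):
    """Ordered-visibility filter for one scanned point.

    scanners is assumed to be sorted by decreasing priority (scanners[0] is C1).
    is_visible(point, scanner) is a hypothetical predicate telling whether the
    surface point is visible from that scanner.
    The current scanner keeps the point only if no higher-priority scanner sees it.
    """
    for scanner in scanners:
        if scanner is current_scanner:
            return True          # no higher-priority scanner sees this point
        if is_visible(point, scanner):
            return False         # a higher-priority scanner will record it
    return False                 # current_scanner was not in the priority list
```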
We could of course just store all these samples as so many colored points, but that amounts to increasing the resolution of the scanner and increasing the amount of data stored, and this requires more resources in storage, computing power, data transfer capacity and point cloud display capability.

Preferentially, we use a method that correctly aggregates several samples so that, once their distances are averaged, no aberrant points are created with respect to the virtual objects. This preferred variant consists in using a clustering method, which groups the samples into different groups that are spatially coherent with respect to their depth. If we then average the depths of the samples of the same group, we obtain a spatial position that is consistent with the rendered virtual objects.

Several clustering methods are possible, but we prefer the "Jenks natural breaks" method to create the groups (Jenks, George F. 1967, "The Data Model Concept in Statistical Mapping", International Yearbook of Cartography 7: 186-190). It is a one-dimensional clustering method, and we use the depth of the samples as the data to be classified.

We run this algorithm asking it to create two groups, then three groups. We then compare the quality of the clusters to determine whether, in the end, the samples are better classified in one group, in two groups or in three groups. To compare the quality of the grouping, several methods are possible, but we preferentially use the Pham method ("Selection of K in K-means clustering", D. T. Pham, S. S. Dimov and C. D. Nguyen, Proc. IMechE Vol. 219 Part C: J. Mechanical Engineering Science).

We finally have one, two or three groups of samples, and within a group we can average the colors of the samples as well as their depth without incoherent average positioning problems. Each of these groups will then correspond to a colored point of the point cloud, or alternatively only the nearest or largest group is used.

Figure 11 illustrates the grouping of the samples for a pixel PX of the image I encoding the 3D scan. C is the central point of the scanner and the different points P represent the points of intersection between the firing rays for a pixel of the scanner and the virtual objects. Figure 11a illustrates the case in which the pixel PX sees two different objects and two groups of samples are obtained, one for the front object and one for the rear object. Figure 11b illustrates the distribution of the samples into three groups.

Preferably, we separate into groups only the samples that require it with regard to their distance. The general principle is that a 5 cm difference in the depths of samples located 1 km away will not be discernible from any point of view within the viewpoint zone ZPV, whereas the same difference at a distance of 50 cm will be clearly discernible. So, for each distance to the scanner, we can calculate the depth difference below which samples can be merged without this being discernible from any other point of the viewpoint zone ZPV.
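A minimal sketch of this per-pixel depth grouping in Python; for brevity it uses a simple gap-based one-dimensional split rather than a full Jenks natural breaks implementation, which is an assumption on our part, but the principle of averaging color and depth only within depth-coherent groups is the same:

```python
import numpy as np

def group_samples_by_depth(depths, colors, max_gap):
    """Group the samples of one pixel into depth-coherent clusters.

    depths:  (N,) distances of the samples to the scanner center
    colors:  (N, 3) colors of the samples
    max_gap: depth difference above which two consecutive (sorted) samples are
             split into separate groups; it plays the role of the
             perceptibility threshold discussed above.

    Returns one (mean_depth, mean_color) pair per group, so that averaging
    never mixes a foreground object with a background object.
    """
    depths = np.asarray(depths, dtype=float)
    colors = np.asarray(colors, dtype=float)
    if len(depths) == 0:
        return []
    order = np.argsort(depths)
    depths, colors = depths[order], colors[order]
    # Start a new group wherever the sorted depths jump by more than max_gap.
    breaks = np.where(np.diff(depths) > max_gap)[0] + 1
    groups = np.split(np.arange(len(depths)), breaks)
    return [(depths[g].mean(), colors[g].mean(axis=0)) for g in groups]
```

In the example of Figure 10, the two blue samples and the two red samples end up in two separate groups, so no purple point floating between the two objects is created.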
Since we create a point cloud that will only be seen from a limited viewpoint zone ZPV, we can calculate the maximum distance that can separate the final viewpoint position from the center of each spherical 3D scan; we call this distance δ_PDV (delta point of view), visible in Figure 12. The resolution of our immersive video with interactive parallax is equal to the resolution of the spherical 3D scanners, and therefore to the number of points they capture over 360 degrees.

This resolution can be expressed as an angular resolution giving the number of degrees per point: angular_definition = 360 / number_of_horizontal_points, in degrees per point. For example, for a definition of 3600 points, this gives an angular definition of 0.1 degrees per point.

In Figure 12, C is the center of the 3D scanner; V is the position of the point of view that is as far as possible, within the viewing zone, from the center of the scanner; δ_PDV is the distance between C and V; pmax is the distance from C of the furthest sample; pmin is the distance from C of the nearest sample; amin is the angle formed between the line going from V to C and the line going from V to the sample at pmin; amax is the angle formed between the line going from V to C and the line going from V to the sample at pmax; and δ_α is the angle difference between amax and amin.

We can then calculate: amin = arctangent(pmin / δ_PDV), amax = arctangent(pmax / δ_PDV), and δ_α = amax - amin. So, if the samples lie between a pmin and a pmax such that δ_α < 0.5 * angular_definition, the samples can be grouped together without further clustering calculations, because the difference in distance will not be perceptible from the user's point of view.

The density of the point cloud is consistent with the viewpoint zone ZPV, as shown in Figure 13: a surface remote from the viewpoint zone ZPV shows three points, while a near surface shows more than fifteen points. The density of the point cloud for a distant surface is therefore low and the different points are widely separated from each other, whereas the density of the point cloud for the near surface is much higher, so it is better defined.

A simple rendering method can be used: it consists in creating for each point a small plate. The color of the plate is the color of the associated point and the position of the plate is that of the associated point. The size of the plate is proportional either to the distance between the plate and the center of the scanner that created the associated point, or to the distance between the plate and the eyes of the user; both methods are usable. The orientation of the plate is constrained to face either the center of the scanner that created the associated point, or the eyes of the user; again, both methods are usable.

With gaming graphics cards such as the Nvidia GTX 780 Ti, it is easy in 2014 to achieve real-time rendering of several tens of millions of points by this method, which is more than enough for the rendered images to be sufficiently defined and pleasant for the user.
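A minimal sketch of this plate-based rendering parameterization in Python; the exact proportionality factor derived from the angular definition is our own assumption for the example:

```python
import numpy as np

def plate_parameters(point, reference, angular_definition_deg):
    """Size and orientation of the small plate drawn for one point.

    reference is either the center of the scanner that created the point or
    the position of the user's eyes (both variants are usable, as described
    above). The plate faces the reference, and its edge length grows in
    proportion to the distance so that neighbouring plates keep covering the
    surface they encode.
    """
    point = np.asarray(point, dtype=float)
    reference = np.asarray(reference, dtype=float)
    to_point = point - reference
    distance = np.linalg.norm(to_point)
    size = distance * np.tan(np.radians(angular_definition_deg))  # proportional to distance
    normal = -to_point / distance                                  # plate faces the reference
    return {"position": point, "size": size, "normal": normal}
```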
For each scanned image of our immersive video with interactive parallax, corresponding to one point cloud, we carry out the following steps in order to implement the viewing method: a) Preferentially, determine which sub-part of the point cloud will be visible to the user. Indeed, the point cloud encodes the entire environment around the user, but he will be able to perceive only a part of it, the viewing angle of the user being restricted. b) Load only the points of the point cloud corresponding to the visible part. Preferably, our scanned image is stored in a form that splits the point cloud into slices, each slice corresponding to a slice of the scan directions. For example, a slice can encode the points seen by the various corresponding scanners at latitudes of 0 to 45 degrees and longitudes of 90 to 125 degrees. c) Determine the position and orientation of the two eyes of the user at the moment when the image will be presented to him, as given by the sensors of the virtual reality installation (generally a headset 13) and by the head movement prediction algorithms. d) Render in real time two images of the colored 3D point cloud, one for the left eye G and one for the right eye D, and present them to the eyes of the user, generally also through a virtual reality headset, visible in Figure 15.

Once the two stereoscopic points of view are defined and the visible part of the point cloud is selected and loaded into memory, we must render this 3D scene, consisting of a point cloud, from the two points of view corresponding to the eyes of the user. Numerous methods have been studied and can be used to render such point clouds in real time. Some use normal and surface size information, others just the point positions; some methods use plates or other geometric shapes for each point, while others rebuild geometry from the point cloud. Some examples: Ruggero Pintus, Enrico Gobbetti and Marco Agus, "Real-time Rendering of Massive Unstructured Raw Point Clouds Using Screen-Space Operators", The 12th International Symposium on Virtual Reality, Archaeology and Cultural Heritage VAST (2011); Gaël Guennebaud, Loïc Barthe and Mathias Paulin, "Interpolatory Refinement for Real-Time Processing of Point-Based Geometry", Eurographics 2005, Dublin, Ireland; and M. Levoy and T. Whitted, "The Use of Points as Display Primitives", Technical Report TR 85-022, The University of North Carolina at Chapel Hill, Department of Computer Science, 1985.

Note that in our case we do not illuminate the rendering of the point cloud: the colors encoded in the points are directly those presented to the user, there is no lighting. Given the large number of methods available for point cloud rendering, we will not describe one in detail; our invention does not relate to a new point cloud rendering method. A particularity of the point clouds generated with the methods of our invention is that their density is consistent with the potential positions in the user's viewpoint zone ZPV, which greatly facilitates their rendering.

We now present the method of recording with real (live-action) shooting, the physical equivalent of our computer graphics methods. The generation of scanned images from multiple 3D scanners can also be done using a set of real-time color 3D scanners, see Figure 14. All the previous discussions on the number and layout of the virtual 3D scanners apply equally to real color 3D scanners.

The technologies currently available that would allow immersive videos with interactive parallax to be recorded are, to our knowledge, the following: triangulation between a camera and a structured light source, which is the method used by Microsoft's Kinect version 1; time-of-flight cameras, the method used by Microsoft's Kinect version 2; and triangulation-based reconstruction methods using several "normal" cameras. We do not mention 3D Lidar scanners here: they are very precise, but at the moment they are not fast enough to capture a minimum of 24 images per second for a full spherical scan.

Note that, unlike with the computer graphics methods, the various color 3D scanners will inevitably see each other (Figure 14). This is not a problem, because the parts that are hidden from a scanner A by another scanner B will then be recorded by scanner B. Scanners may thus occlude parts of the scene for one another, Figure 14.
Figure 14 shows an example with four physical scanners, each having a ball-shaped bulk. A, B, C and D are the four physical 3D scanners represented with their bulk. An object O which is obscured by the scanner B from the scanner A does not pose a problem: by the quality rule related to pure distance, this object O being closer to the scanner B than to the scanner A, its surface will in any case be retained for the scanner B.

From Figure 16 we note that each scanner scans all directions (360 degrees horizontally and 180 degrees vertically). We can encode the result of the scan of a scanner C at a time t in the form of an image I. Each pixel of the image corresponds to a firing direction and encodes both the color of the point and the distance between this point and the center of the scanner.

The classical method for representing all the scanning directions is to encode them on the image with the longitude as abscissa (X) and the latitude as ordinate (Y). This projection system is often called equirectangular. The problem with this method is that it is not homogeneous in terms of the surface represented by each pixel: a pixel near a pole encodes an area much smaller than a pixel near the equator.

We therefore preferentially use another method of representing a spherical projection on an image. This so-called ecospheric method makes it possible to have good homogeneity of the surfaces corresponding to each of the pixels. It also completely covers all firing directions, but does not present the problem of "pinching" at the poles.

The ecospheric method preserves the principle of encoding latitude along the ordinate and longitude along the abscissa, but the relationship between longitude and abscissa is no longer linear. The principle consists in calculating, for each line of the image (therefore for each latitude), the circumference of the circle that is represented by this line. As a line of the image represents a horizontal section of the sphere, it corresponds to a circle in the horizontal section plane. For a sphere of radius 1 this circumference is sin(a) * 2 * π, with a the latitude angle measured from the North Pole (so at the North Pole a = 0 degrees, at the equator a = 90 degrees, and at the South Pole a = 180 degrees). The ratio of this circumference to the circumference at the equator is therefore simply sin(a).

For each line of the image I, the ecospheric method limits the number of horizontal points of the line to a definition equal to the total horizontal definition of the image * sin(a); the rest of the line remains black. Each point of the same line is separated from the next by the same angle increment δ_β, but this angle increment δ_β varies from line to line, again as a function of sin(a), according to the formula δ_β = 360 degrees / (horizontal_resolution_of_image * sin(a)). Note that the total horizontal resolution of the image is only used at the equator; all other latitudes use fewer points. The concept of ecospheric projection is preferentially used for each of the scanners.
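A minimal sketch of this ecospheric row layout in Python; the half-pixel latitude offset and the rounding are our own assumptions for the example:

```python
import numpy as np

def ecospheric_row_layout(width, height):
    """For each row of the ecospheric image, return the number of pixels
    actually used and the longitude increment delta_beta (in degrees).

    width  : total horizontal definition of the image (fully used only at the equator)
    height : number of rows, i.e. latitude samples from north pole to south pole
    """
    rows = []
    for y in range(height):
        a = np.pi * (y + 0.5) / height          # latitude from the north pole, in radians
        used = max(1, int(round(width * np.sin(a))))
        delta_beta = 360.0 / used               # angle between two points of this row
        rows.append((used, delta_beta))
    return rows

def direction_to_pixel(latitude_deg, longitude_deg, width, height):
    """Map a firing direction (latitude from the north pole, longitude) to a pixel."""
    y = min(height - 1, int(latitude_deg / 180.0 * height))
    used, _ = ecospheric_row_layout(width, height)[y]
    x = int(longitude_deg / 360.0 * used) % used
    return x, y
```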
In Figure 16, the slice of the sphere corresponding to all longitudes at latitude a corresponds to a circle 20, and a circle 30 is the edge of the sphere corresponding to all longitudes at the equator (a = 90 degrees). The projection 40 is the circle seen from above corresponding to the equatorial latitude; its radius is by definition 1 and its circumference 2π. The projection 50 is the circle seen from above corresponding to the latitude a; its radius R is sin(a) and its circumference 2π * sin(a). C is the center of the sphere and I is the corresponding ecospheric image. Line 90 is the pixel band corresponding to the latitude a; note that it does not take up the full width of the image I, but only horizontal_definition * sin(a). Line 100 is the pixel band corresponding to the latitude of the equator, which takes up the full width of the image, and 110 is an enlargement of the image. The reference 120 is one of the pixels of the band 90; it and all the other pixels of the same line correspond, on the circle 50, to an angle difference of δ_β = 360 degrees / (horizontal_definition * sin(a)).

We also preferentially use this method with a variation, in which we separate the image into a certain number of horizontal slices, typically 8. This later allows quick access to the information corresponding to certain slices of the sphere.

Note that when computing the point cloud from each scanner, additional information can be stored per point in addition to the distance to the scanner center and the color: for example, the average direction of the normal of the surface corresponding to the point, possibly the size of this surface, or information on the connectivity with the adjacent points.

The viewpoint zone ZPV can be moved by the director of the video just as he would move a single-viewpoint camera (see Figure 17). The analogy used earlier of the passenger in a car remains valid: the user 1, like the passenger of the car, does not control his overall movement, but he can look in the direction he wants. So, unlike virtual reality, where at the time of real-time rendering a camera is moved in the virtual space 3 when the user 1 moves, in the invention it is the space 3 that moves around the user 1 when the viewpoint zone ZPV has been moved during the creation of the immersive film.

One can imagine the system as recreating around the user 1, at the time of viewing, a virtual 3D scene for each fraction of time of the immersive video. Each of these ephemeral virtual scenes is limited to what the user 1 can see from the predetermined viewpoint zone ZPV. The evolution of the appearance of these scenes corresponds to the movements of the objects or characters in the video and to the displacement of the position of the viewpoint zone ZPV controlled by the director of the film.

Thus, in Figure 17, a scene 3 is scanned from a first position of the viewpoint zone ZPV at a first time t1, and it is scanned again from a second position of the viewpoint zone ZPV at a second time t2. So at t1 a first point cloud 9 is obtained, and then at t2 a second point cloud 9 is obtained. For the user 1, it is the scene that moves. The advantage of an accuracy adapted to the position of the viewpoint zone ZPV can also be seen: objects which are closer show a high number of points, whereas objects which are farther away show only a limited number of points. In the figure, the thicker lines indicate a higher point density than the thinner lines.

Real-world recording can also be done via a three-dimensional virtual representation.
This method then takes place in two steps: a) scan the different real elements to create a virtual 3D scene (possibly with entirely virtual parts), and b) use the methods of the invention to create, from this scene, an immersive video with interactive parallax (via multiple virtual spherical color scanners as previously described).

In this approach, Lidar-type scanners can be used conventionally to scan fixed sets, which are then reconstituted in a virtual 3D scene. We also note that in this approach it is not absolutely necessary to perform multiple physical spherical scans for moving characters or objects. For example, several synchronized time-of-flight cameras with 90 degrees of aperture can be used to record the movements of an actor in a given direction relative to the center of the viewpoint zone. This direction can even vary: the set of time-of-flight cameras can be rotated to follow the movements of the actor, as one would pan with a normal camera. The actor is then again transformed into an animated 3D object.

Nothing prevents us from superimposing, on our pre-calculated or pre-recorded stereoscopic spherical video with interactive parallax, elements computed in real time in computer-generated images, or even elements recorded directly as real images, on site or remotely. The possibilities are numerous; we will explore some that seem interesting.

The user will sooner or later look in a direction where he should see his own body. We could, at the time of creation of the stereoscopic spherical video with interactive parallax, have a body present in the rendering or recording at the place where the user's body should be, but this body will certainly be different from the real body of the user (in terms of clothing, build, etc.) and in a position different from that of the user's real body. The absence of any view of one's own body, or an inconsistency between the appearance or posture of the user's body and the view he has of it, reduces the feeling of immersion, the brain of the user considering it abnormal not to see his own body correctly.

In the context of immersive video and the use of virtual reality headsets, several methods can be employed so that the user can see his own body. The simplest would be to mount two normal live-action cameras on the virtual reality headset, pointing towards the front of the headset. But the point of view of the two cameras is necessarily different from that of the eyes of the user; indeed, these cameras cannot physically be placed at the location of the user's eyes. The resulting images will not be perfectly consistent with the view the user is used to having of his own body, which, as we have seen, degrades the quality of the immersion.

Another solution is to mount two "scanner" cameras of the Microsoft Kinect 1.0 or 2.0 type. From the information coming from these "scanner" cameras, a colored 3D model of the user's body can be created and rendered in real time from the point of view of the user's eyes.

Yet another technique consists in scanning the user beforehand with a "scanner" camera, to capture the color of his clothes, his build and, more generally, a colored 3D model of the user. A "scanner" camera is then placed in front of the user; it makes it possible to analyze the movements of the user, and software then deforms the user's 3D model to reproduce these movements in real time.
This kind of technique for analyzing the movements of a user is commonly used on video game consoles such as Microsoft's Xbox 360 or Xbox One.

Here it is not a question of increasing the sensation of immersion, but of giving a social aspect to the immersive video experience. The idea is to be able to share the immersive video experience directly with one or more other users. This sharing can be done from the visual point of view by giving two users the possibility of seeing each other's body. Thus a user can point at something to attract the other user's attention, and users can also see the other user's body language reactions: surprise, laughter, curiosity, fear, etc. It should be noted that users do not have to be physically next to each other to share the experience: some of the methods we will describe to achieve visual interaction between users can be used remotely.

From the visual point of view, the techniques allowing users to see each other are equivalent to those allowing a user to see his own body. Normal or "scanner" cameras are mounted on the virtual reality headset; with these techniques, users must be physically next to each other. Note that the infrastructure related to a user's virtual reality headset (the headset itself, but also its connection to the computer, as well as often a kind of cage to prevent the user from falling) is likely to be visible to another user 1. The method of scanning each user beforehand and then reconstituting his movements offers more benefits in the case of multiple users, because it does not require the users to be physically side by side and, in addition, there is no problem of "cleaning away" the infrastructure.

Informative visual elements can be added in real time when viewing an immersive video. The direction of the user's gaze is known at all times, and a database can encode the names and characteristics of the objects or characters according to the direction in which they appear at each moment of the video. Thus, when the user gives the command, he obtains visual information in the form of icons and/or text that informs him about what he is looking at. For example, he can get the name of any person he looks at in the immersive video, the species of an animal, the name of a place, and so on.

Any fixed or animated virtual object can be added in real time to the immersive video. We see two major uses for these additions.

The first is to allow an interaction of the user with the scene in which he is immersed. A playful example is a shooting game in the form of a "ride": the user is moved through the immersive film and targets added in real time appear. The user then has the possibility, via a pointing system or simply with his eyes, to shoot at these targets. This is where real time comes in: if a target is hit, it changes its appearance or disappears. Note that the targets can be simple objects that can easily be rendered with hyper-realism in real time, while the sets are part of the pre-calculated or recorded immersive film. The whole gives a realism unattainable by classic virtual reality (which calculates the entire image in real time).

The second is to add visual elements that must vary from one viewing of the immersive video to another: for example, advertising posters in the immersive video, which we want to be able to vary depending on the country in which the video is viewed, the nationality or language of the user, the profile of the user, or the latest advertising campaign that has just come out.
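Returning to the informative overlays mentioned above, here is a minimal sketch of the lookup such a database could support; the data layout and field names are purely hypothetical:

```python
def lookup_gaze_info(entries, time_s, latitude_deg, longitude_deg):
    """Return the label of the object the user is looking at, if any.

    entries is a hypothetical database: a list of dicts, each with a time
    interval, an angular region (latitude/longitude bounds in the video's
    frame) and a label describing the object or character in that region.
    """
    for e in entries:
        if (e["t0"] <= time_s <= e["t1"]
                and e["lat_min"] <= latitude_deg <= e["lat_max"]
                and e["lon_min"] <= longitude_deg <= e["lon_max"]):
            return e["label"]
    return None
```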
When playing the immersive video, the user can be placed on a system recreating sensations of acceleration. Generally these systems are seats or cabins mounted on sets of jacks that can move the seat or the cabin; such systems are conventionally used in airline flight simulators to train pilots. When the user is on a seat of this type, the movements of the seat give sensations of acceleration to the user. If, for example, the seat leans backwards, the user feels gravity (and therefore an acceleration) towards the rear and has the impression of accelerating.

During the creation of the immersive video, the accelerations experienced by the camera during its displacements, and especially during its changes of direction or speed, can be measured for a real camera and calculated for a virtual camera. If these recorded accelerations are reproduced, for example by a cabin on jacks, the user feels accelerations consistent with the movements of his point of view in the immersive video, which greatly increases his feeling of immersion. The user can also be transported in a nacelle (for example a roller coaster or a Ferris wheel), the displacement of the nacelle being sent to the display system to synchronize the unfolding of the immersive film with interactive parallax with the movement of the nacelle.

Hearing is also a very important sense. We perceive the spatial position of sound sources: if, for example, someone behind you calls you, you perceive that the sound comes from behind you and you turn around. You can even usually estimate the distance and height of the sound source. Spatialization of sound consists in being able to give a user the impression that a sound is emitted from a predetermined location relative to his head. Currently, computer systems exist to properly spatialize sounds in real time for a user wearing headphones, for example Microsoft's DirectSound3D.

In order for the immersion sensation of the user 1 to be more complete, his sound perception must also be consistent with the scene he is viewing in the immersive video. For example, imagine a sound whose source is right in front of the user 1 in the scene of the immersive video. If the user 1 looks straight ahead, the sound source must be spatialized straight ahead with respect to the head and ears of the user 1; if the user 1 now turns his head to the right, the sound source must be spatialized to the left in relation to the user's head and ears. We therefore see that, depending on the orientation of the user's head and the desired position of the sound source in the immersive video, the sound must be recalculated in real time to give a good feeling of immersion. Note that in the case of multiple users, the users could talk via microphones, and the sound from the other users could also be spatialized so as to come from the image of their body.
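A minimal sketch of the geometric step behind this spatialization: expressing the sound source position in the user's head frame before handing it to a real-time spatializer. The rotation convention is our own assumption, and this is not the DirectSound3D interface:

```python
import numpy as np

def source_in_head_frame(source_pos_world, head_pos_world, head_rotation_world):
    """Express a sound source position in the listener's head frame.

    head_rotation_world is the 3x3 rotation matrix taking head-frame vectors
    to world-frame vectors; its transpose maps world vectors back to the head
    frame. The result is what a real-time spatializer needs in order to place
    the sound correctly when the user turns his head.
    """
    R = np.asarray(head_rotation_world, dtype=float)
    offset = np.asarray(source_pos_world, dtype=float) - np.asarray(head_pos_world, dtype=float)
    return R.T @ offset   # same source, now expressed relative to the head's axes
```

If the source is straight ahead in the world and the user turns his head 90 degrees to the right, the returned vector points to the user's left, as in the example above.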
Claims (16) [1] 1. Method for obtaining immersive videos with interactive parallax, characterized in that it comprises: a) the determination of the shape and size of a viewpoint zone (ZPV) centered around a point of view at rest (R), b) the determination of the number of scanners forming a set of scanners (2) with respect to the viewpoint zone (ZPV), c) the scanning of a space (3) via the set of scanners (2), each of which determines a point cloud (9), d) the merging of the point clouds (9) in the same space to obtain a merged point cloud (6), f) the encoding of the merged point cloud (6) in a special image (I), and g) the obtaining of a video by binding together all the special images (I) at a determined frequency, in particular twenty-five frames per second. [2] 2. The method of obtaining immersive videos with interactive parallax according to claim 1, which method comprises eliminating the redundant points in the merged point cloud (6) to obtain a filtered point cloud (8) by determining an apparent surface of an object and/or by setting an order of priority for the set of scanners (2). [3] 3. Method of obtaining immersive videos with interactive parallax according to claim 1 or 2, wherein said viewpoint zone (ZPV) comprises a predetermined volume comprising the positions attainable during the translation movements of the head of a user (1) when the user is maintained in a fixed position. [4] 4. Method of obtaining immersive videos with interactive parallax according to any one of claims 1 to 3, wherein the set of scanners (2) can be either virtual scanners or physical scanners. [5] 5. Method for obtaining immersive videos with interactive parallax according to any one of claims 1 to 4, wherein the set of scanners (2) can be color scanners, each scanner having a field of view of 360 degrees horizontally and 180 degrees vertically. [6] 6. The method of obtaining immersive videos with interactive parallax according to any one of claims 1 to 5, wherein the scanners are placed at least at the extreme points (7) of the viewpoint zone (ZPV) and at the point of view at rest (R). [7] 7. Method of obtaining immersive videos with interactive parallax according to any one of claims 1 to 6, wherein for each scanned point (P) at least a distance (d) and a color (c) of this point (P) with respect to the central point (C) are obtained. [8] 8. Method of obtaining immersive videos with interactive parallax according to claim 7, wherein a depth (p) of a scanned point (P) is the distance between the central point (C) and said scanned point (P). [9] 9. Method of obtaining immersive videos with interactive parallax according to claim 2, wherein a scan quality of the set of scanners (2) is determined by retaining the point (P) corresponding to the scanner having the lowest apparent surface. [10] 10. Method of obtaining immersive videos with interactive parallax according to claim 1, wherein the encoding of the fused point cloud (6) or of the filtered point cloud (8) is ecospheric. [11] 11. Method for obtaining immersive videos with interactive parallax according to any one of claims 1 to 10, wherein at least the scene (3) is scanned by the set of scanners (2) from a first position at a first time (t1), and it is scanned by the set of scanners (2) from a second position at a second time (t2). [12] 12.
Method for viewing immersive videos with interactive parallax, characterized in that it comprises, for each of the special images (I) of an immersive video with interactive parallax: a) the determination of a position and an orientation of the eyes of a user (1), by sensors, and the use of head movement prediction algorithms, b) the determination of a part of a consolidated point cloud (6) or of a filtered point cloud (8) according to a viewing angle of the user (1), c) the loading of the portion of the consolidated point cloud (6) or of the filtered point cloud (8) visible to the user (1), d) the rendering in real time of two images (I) of the loaded part of the consolidated point cloud (6) or of the filtered point cloud (8), and e) the presentation of the rendering to the eyes of the user (1). [13] The immersive video viewing method with interactive parallax according to claim 12, wherein all the points of the consolidated point cloud (6) or of the filtered point cloud (8) are loaded. [14] The immersive video viewing method with interactive parallax according to claim 12 or 13, wherein the position and orientation of the eyes of the user (1) are given by sensors located in a virtual reality headset (9). [15] Immersive video viewing method with interactive parallax according to any one of claims 12 to 14, wherein the presentation to the eyes of the user (1) is performed through the virtual reality headset (9). [16] Immersive video viewing method with interactive parallax according to any one of claims 12 to 15, wherein the user (1) is transported in a nacelle, the displacement coordinates of said nacelle being sent to a display system to synchronize the unfolding of an immersive film with interactive parallax with the displacement of the nacelle.